Search Results: "mdz"

12 September 2010

Alastair McKinstry: Exoclimes: the diversity of planetary scientists

I'm just back from ExoClimes 2010: Exploring the Diversity of Planetary Atmospheres. An excellent conference: the PDFs of the talks and posters are now online, and they will be putting the videos of the talks up soon. In particular, the organizers deserve thanks for bringing exoplanetary scientists and observers together with climate modellers doing Earth (and Mars, Titan, Venus, ...) models.
Model complexity graph: Peter Cox on model complexity
The last talk on Friday was by Peter Cox, on climate change and exoplanet sciences, and it was far better than expected for the "graveyard shift". One theme of the conference was the need for a 'hierarchy' of models, from simple energy-balance models to full circulation (GCM) models: using progressively more complex models to understand more bits of what's going on. Exoplanet workers mostly use simpler models, progressing now to GCMs, while Earth modellers are moving beyond GCMs to "Earth system" models including biology, etc.

Peter pointed out the two styles of work: the exoplanet modellers are short of data, and risk being too speculative. We know little of what the planets are like, and concentrate on implementing physics in the models to see what they might be like. Earth modellers, on the other hand, are if anything swamped with data: the tendency there is to make the model fit the data, by adjusting parameters until it does so. The danger of this approach is that the model will then not work away from present-day Earth conditions. Tim Lenton pointed out some work that was done with the Met Office model, where they took the radiative transfer part of the model and tested it for other planets and paleo-Earth conditions. The model blew up: it couldn't handle 2x or 4x current CO2 levels. (This has since been corrected.)

Over dinner there were interesting discussions on the different styles within the communities. While the underlying GCMs used come from the Earth sciences, it's quite common within the exoplanetary community for a researcher to work on all parts of the model: dynamics one day, radiative transfer the next. In Earth climate work, people have become more specialized: someone is a 'radiative transfer' person, and won't touch other parts of the code (even if they can follow them in the huge codes we have today!). On the other hand, there is a greater tradition of model inter-comparison in the Earth sciences, where model outputs are compared to each other for known test cases (Held & Suarez, the CMIP5 project, etc.). Apart from some initial work by Emily Rauscher, little has been done on this in exoplanetary models; it was agreed that more of this would be a good idea. Radiative transfer (the interaction of 'sunlight' with the atmosphere, where it gets absorbed, scattered and re-radiated) in particular seems to be an area that could benefit from this.

In this middle ground, François Forget showed the work on the LMDZ model and applying GCMs to terrestrial planets. They've successfully applied this model to Mars, Titan, and partially to Venus (a much tougher problem, due to its heavy clouds giving a long radiative timescale). There are problems with correctly explaining super-rotation, though. This is where the atmosphere rotates faster than the planet: on Venus, for example, the planet rotates every 243 days, while the clouds rotate around the planet every 4 days. Sébastien Lebonnois described the possible mechanisms for Venus and Titan; Jonathan Mitchell also did some interesting work on this recently. Different regimes apply for different rotation rates of the planet.

Ralph Lorenz pointed out the lack of "real paleo-Earth" climate work at the moment. While geology has inspired a lot of work on the atmospheric composition, what with the different gas mixtures (meaning Earth-model radiative transfer codes don't work) and the faster dynamics meaning super-rotation could apply (Earth's day was about 8 hours long in the Archean era), we don't have a model of that climate yet.
It looks like we should treat Earth as an exoplanet.

30 August 2010

Matt Zimmerman: Traveling at home

For me, the most enjoyable part of traveling is the inspiration that I derive from visiting different places, talking to people, and generally being outside of my normal environment. This bank holiday weekend, when so many Londoners visit faraway lands, my partner and I stayed in London instead, and sought inspiration closer to home. The city has been delightfully quiet, and in contrast to the preceding week, the weather was mostly pleasant, apart from the sudden downpours the BBC described as "squally showers".
Photo of deer in Richmond Park

Photo credit: Márcio Cabral de Moura


We spent Saturday afternoon in Richmond Park, a 2500-acre nature preserve easily accessible via public transport from London. The plentiful oak trees, fallow deer, and various species of water fowl made it easy to forget the city for a while. Having visited a few times on foot, I think it would be fun to cycle next time, and see different areas of the park. Afterward, we had dinner at a tapas restaurant in Parsons Green which offered notably excellent service as well as good food. By this time, it was nearly 7:00pm, and we took a chance on getting last-minute theatre tickets to see Jeff Goldblum and Mercedes Ruehl in Neil Simon's The Prisoner of Second Avenue. We arrived at the theatre just in time for the show, which was not sold out, and in fact had quite reasonable seats available. The show had several good laughs, holding up fairly well nearly 40 years after the original Broadway production.
Photo of the exhibition at the Design Museum

Photo credit: Gary Bembridge


On Sunday, we visited the Design Museum for the first time. Having been disappointed by the nearby Fashion and Textile Museum, our expectations were not too high, but it turned out to be very worthwhile. The Brit Insurance Designs of the Year exhibition showcased designs from architecture, fashion, furniture, transport and more. Some of my favorites were: I was delighted to see that there were a half dozen or so exhibits which related to open source software. Even including the theatre tickets, it was a very inexpensive holiday compared to traveling overseas, and generated a lot less CO2. I was more than satisfied with the inspiration available within a relatively small radius. I don't think I'll give up traveling, as I really enjoy seeing friends who live far away, but I think I'll be more inclined to stay home during peak travel times and enjoy local activities.

25 August 2010

Matt Zimmerman: DebConf 10: Last day and retrospective

DebConf continued until Saturday, but Friday the 6th was my last day as I left New York that evening. I'm a bit late in getting this summary written up.

Making Debian Rule, Again (Margarita Manterola)

Marga took a bold look at the challenges facing Debian today. She says that Debian is perceived to be less innovative, out of date, difficult to use, and shrinking as a community. She called out Ubuntu as "the elephant in the room", which is taking away from Debian. She insists that she is not opposed to Ubuntu, but that nonetheless Ubuntu is to some extent displacing Debian as a focal point for newcomers (both users and contributors). Marga points out that Debian's work is still meaningful, because many users still prefer Debian, and it is perceived to be of higher quality, as well as being the essential basis for derivatives like Ubuntu. She conducted a survey (about 40 respondents) to ask what Debian's problems are, and grouped them into categories like motivation and communication (tied for the #1 spot), visibility (#3, meaning public awareness and perception of Debian) and so on. She went on to make some suggestions about how to address these problems. On the topic of communication, she proposed changing Debian culture by: This stimulated a lot of discussion, and most of the remaining time was taken up by comments from the audience. The video has been published, and offers a lot of insight into how Debian developers perceive each other and the project. She also made suggestions for the problems of visibility and motivation. These are crucial issues for Debian devotees to be considering, and I applaud Marga for her fortitude in drawing attention to them. This session was one of the highlights of this DebConf, and catalyzed a lot of discussion of vital issues in Debian.

Following her talk, there was a further discussion in the hallway which included many of the people who commented during the session, mostly about how to deal with problematic behavior in Debian. Although I agreed with much of what was said, I found it a bit painful to watch, because (ironically) this discussion displayed several of the characteristic people problems that Debian seems to have: These same patterns are easily observed on Debian mailing lists for the past 10+ years. I exhibited them myself when I was active on these lists. This kind of cultural norm, once established, is difficult to intentionally change. It requires a fairly radical approach, which will inevitably mean coping with loss. In the case of a community, this can mean losing volunteer contributors who cannot let go of this norm, and that is an emotionally difficult experience. However, it is nonetheless necessary to move forward, and I think that Debian as a community is capable of moving beyond it.

Juxtaposition

Given my history with both Debian and Ubuntu, I couldn't help but take a comparative view of some of this. These problems are not new to Debian, and indeed they inspired many of the key decisions we made when founding the Ubuntu project in 2004. We particularly wanted to foster a culture which was supportive, encouraging and welcoming to potential contributors, something Debian has struggled with. Ubuntu has been, quite deliberately, an experiment in finding solutions to problems such as these. We've learned a lot from this experiment, and I've always hoped that this would help to find solutions for Debian as well. Unfortunately, I don't think Debian has benefited from these Ubuntu experiments as much as we might have hoped.
A common example of this is the Ubuntu Code of Conduct. The idea of a project code of conduct predates Ubuntu, of course, but we did help to popularize it within the free software community, and this is now a common (and successful) practice used by many free software projects. The idea of behavioral standards for Debian has been raised in various forms for years now, but never seems to get traction. Hearing people talk about it at DebConf, it sometimes seemed almost as if the idea was dismissed out of hand because it was too closely associated with Ubuntu. I learned from Marga's talk that Enrico Zini drafted a set of Debian Community Guidelines over four years ago, in 2006. It is perhaps a bit long and structured, but is basically excellent. Enrico has done a great job of compiling best practices for participating in an open community project. However, his document seems to be purely informational, without any official standing in the Debian project, and Debian community leaders have hesitated to make it something more. Perhaps Ubuntu leaders (myself included) could have done more to nurture these ideas in Debian. At least in my experience, though, I found that my affiliation with Ubuntu almost immediately labeled me an outsider in Debian, even when I was still active as a developer, and this made it very difficult to make such proposals. Perhaps this is because Debian is proud of its independence, and does not want to be unduly influenced by external forces. Perhaps the initial growing pains of the Debian/Ubuntu relationship got in the way. Nonetheless, I think that Debian could be stronger by learning from Ubuntu, just as Ubuntu has learned so much from Debian.

Closing thoughts

I enjoyed this DebConf very much. This was the first DebConf to be hosted in the US, and there were many familiar faces that I hadn't seen in some time. Columbia University offered an excellent location, and the presentation content was thought-provoking. There seemed to be a positive attitude toward Ubuntu, which was very good to see. Although there is always more work to do, it feels like we're making progress in improving cooperation between Debian and Ubuntu. I was a bit sad to leave, but was fortunate enough to meet up with Debian folk during my subsequent stay in the Boston area as well. It felt good to reconnect with this circle of friends again, and I hope to see you again soon. Looking forward to next year's DebConf in Bosnia!

5 August 2010

Matt Zimmerman: DebConf 10: Day 3

How We Can Be the Silver Lining of the Cloud (Eben Moglen)

Eben's talk was on the same topic as his Internet Society talk in February, which I had downloaded and watched some time ago. He challenges the free software community to develop the software to power the "freedom box", a small, efficient and inexpensive personal server. Such a system would put users more in control of their online lives, give them better protection for it under the law, and provide a platform for many new federated services. It sounds like a very interesting project, which I'd like to write more about.

Statistical Machine Learning Analysis of Debian Mailing Lists (Hanna Wallach)

Hanna is bringing together her interests in machine learning and free software by using machine learning techniques to analyze publicly available data from free software communities. In doing so, she hopes to develop tools for studying the patterns of collaboration, innovation and other behavior in these communities. Her methodology uses statistical topic models, which infer the topic of a document based on the occurrence of topical words, to group Debian mailing list posts by topic. Her example analyzed posts from the debian-project and debian-women mailing lists, inferring a set of topics and categorizing all of the posts according to which topic(s) were represented in them. Using this data, she could plot the frequency of discussion of each topic over time, which revealed interesting patterns. The audience quickly zeroed in on practical applications for things like flamewar and troll detection.

Debian Derivatives BoF (Matt Zimmerman)

I organized this discussion session to share perspectives on Debian derivatives, in particular how we can improve cooperation between derivatives and Debian itself. The room was a bit hard to find, so attendance was relatively small, but this turned out to be a plus. With a smaller group, we were able to get acquainted with each other, and everyone participated. Unsurprisingly, there were many more representatives from Ubuntu than from other derivatives, and I was concerned that Ubuntu would dominate the discussion. It did, but I tried to draw out perspectives from other derivatives where possible. On the whole, the tone was positive and constructive. This may be due in part to people self-selecting for the BoF, but I think there is a lot of genuine goodwill between Debian and Ubuntu. Stefano Zacchiroli took notes in Gobby during the session, which I expect he will post somewhere public when he has a chance.

2 August 2010

Matt Zimmerman: DebConf 10: Day 2

Today was the first day of DebConf proper, where all of the sessions were aimed at project participants.

Bits from the DPL (Stefano Zacchiroli)

Stefano delivered an excellent address to the Debian project. As Project Leader, he offered a perspective on how far Debian has come, raised some of the key questions facing Debian today, and challenged the project to move forward and improve in several important ways. He asked the audience: Is Debian better than other distributions? Is Debian still relevant? Why/how? Having asked this question on identi.ca and Twitter recently, he presented a summary. There was a fairly standard list of technical concerns, but also: He pointed out some areas which we would like to see improve, including: All in all, I thought this was an accurate, timely and inspirational message for the project, and the talk is worth watching for any current or prospective contributor to Debian.

Debian Policy BoF (Russ Allbery)

Russ facilitated a discussion about the Debian policy document itself and the process for managing it. He has recently put in a lot of time working on the backlog (down from 160+ to 120), but this is not sustainable for him, and help is needed. There was a wide-ranging discussion of possible improvements including: There was also some discussion in passing of the long-standing confusion (presumably among people new to the project) with regard to how policy is established. In Debian, best practices are first implemented in packages, then documented in policy (not the reverse). Sometimes, improvements are suggested at the policy level, when they need to start elsewhere. I'm not very familiar with how the policy manual is maintained at present, but listening to the discussion, it sounded like it might help to extend the process to include the implementation stage. This would allow standards improvements to be tracked all the way through from concept, to implementation, to documentation.

The Java Packaging Nightmare (Torsten Werner)

Torsten described the current state of Java packaging in Debian and the general problems involved, including licensing issues, build system challenges (e.g. Maven) and dependency management. His slides were information-dense, so I didn't take a lot of notes. His presentation inspired a lively discussion about why upstream developers of Java applications and libraries often do not engage with Debian. Suggested reasons included:

Collaboration between Ubuntu and Debian (Jorge Castro)

Jorge talked about the connections between Debian and Ubuntu, how people in the projects perceive each other, and how to foster good relationships between developers. He talked about past efforts to quantify collaboration between the projects, but the focus is now on building personal relationships. There were many good questions and comments afterward, and I'm looking forward to the Debian derivatives BoF session tomorrow to get into more detail.

Tonight is the traditional wine and cheese party. When this tradition started, I was one of just a handful of people in a room with some cheese and paper plates, but it's now a large social gathering with contributions of cheese and wine from around the world. I'm looking forward to it.

Matt Zimmerman: DebConf 10: Day 1

This week, I am attending DebConf 10 at Columbia University in New York. The first day of DebConf is known as Debian Day. While most of DebConf is for the benefit of people involved in Debian itself, Debian Day is aimed at a wider audience, and invites the public to learn about, and interact with, the Debian project. These are the talks I attended.

Debian Day Opening Plenary (Gabriella Coleman, Hans-Christoph Steiner)

Hans-Christoph discussed Debian and free software from a big-picture perspective: why software freedom matters, challenging the producer/consumer dichotomy, how the Debian ecosystem hangs together, and so on.

Steps to adopting F/OSS in government (Andy Oram)

Andy discussed FLOSS adoption in governments, drawing on examples from Peru, the city of Munich, and the state of Massachusetts. He covered the reasons why this is valuable, the relationship between government transparency and software freedom, and practical advice for successful adoption and deployment.

Pedagogical Freedom (panel, Jonah Bossewitch et al)

The panelists discussed the use of technology in education, especially free software, some of the parallels between free software and education, and what these communities could learn from each other. This is a promising topic, though the perspectives seemed to be mostly from the education realm. There is much to be learned on both sides.

Google Summer of Code 2010 at Debian (Obey Arthur Liu)

This talk covered the student projects for this year's Summer of Code. Most of the students were in attendance, and presented their own work. They ranged from more specialized projects like the Hurd installer, to core infrastructure improvements like multi-arch in APT.

Beyond Sharing: Open Source Design (Mushon Zer-Aviv)

Mushon gave an excellent talk on open design. This is a subject I've thought quite a bit about, and Asheesh validated many of my conclusions from a different angle. I've added a new post to my todo list to go into more detail on this subject. Some points from his talk which resonated with me:

How Government can Foster Freedom in Technology (Hon. Gale Brewer)

Councillor Brewer paid a visit to DebConf to tell us about the work she is doing on the city council to promote better government through technology. Brewer seems to be a strong advocate of open data, saying essentially that all government data should be public. She summarized a bill to mandate that New York City government data be public, shared in raw form using open standards, and kept up to date. It sounded like a very strong move which would encourage third-party innovation around the data. She also discussed the need for greater access to computers and Internet connectivity, particularly in educational settings, and a desire to have all public hearings and meetings shared online.

Why is GNU/Linux Like a Player Piano? (Jon Anderson Hall, Esq.)

Jon is a very engaging speaker. He drew parallels between the development of player pianos, reproducing pianos, reed organs, pipe organs and free software. He even tied in Hedy Lamarr's work which led to spread-spectrum wireless technology. To be quite honest, I did not find that these analogies taught me much about either free software or player pianos, but nonetheless, I couldn't help but take an interest in what he was saying and how he presented it.

DebConf Opening Plenary (Gabriella Coleman)

Biella and company explained all the ins and outs of the event: where to go, what to do (and not do), and most importantly, whom to thank for all of it.
Now in its 11th year, DebConf is an impressively well-run conference. I'm looking forward to the rest of the week!

26 July 2010

Matt Zimmerman: Embracing the Web

The web offers a compelling platform for developing modern applications. How can free software benefit more from web technology, and at the same time promote more software freedom on the web? What would the world be like if FLOSS web applications were as plentiful and successful as traditional FLOSS applications are today?

Web architecture

The web, as a collection of interlinked hypertext documents available on the Internet, has been well established for over a decade. However, the web as an application architecture is only just hitting its stride. With modern tools and frameworks, it's relatively straightforward to build rich applications with browser-oriented frontends and HTTP-accessible backends. This architecture has its limitations, of course: browser compatibility nightmares, limited offline capabilities, network latency, performance challenges, server-side scalability, a complicated multimedia story, and so on. Most of these are slowly but surely being addressed or ameliorated as web technology improves. However, for a large class of applications, these limitations are easily outweighed by the advantages: cross-platform support, instantaneous upgrades, global availability, etc. The web enables developers to reach the largest audience of users with the most compelling functionality, and simplifies users' lives by giving them immediate access to their digital lives from anywhere. Some web advocates would go so far as to say that if an application can be built for the web, it should be built for the web, because it will be more successful. It's no surprise that new web applications are being developed at a staggering rate, and I expect this trend to continue.

So what?

This trend represents a significant threat, and a corresponding opportunity, to free software. Relatively few web applications are free software, and relatively few free software applications are built for the web. Therefore, the momentum which is leading developers and users to the web is also leading them (further) away from free software. Traditionally, pragmatists have adopted free software applications because they offered immediate gratification: it's much faster and easier to install a free software application than to buy a proprietary one. The SaaS model of web applications offers the same (and better) immediacy, so free software has lost some of its appeal among pragmatists, who instead turn to proprietary web applications. Why install and run a heavyweight client application when you can just click a link? Many web applications, perhaps even a majority, are built using free software, but are not themselves free. A new generation of developers share an appreciation for free software tools and frameworks, but see little value in sharing their own software. To these developers, free software is something you use, not something you make. Free software cannot afford to ignore the web. Instead, we should embrace the web more completely, more powerfully, and more effectively than proprietary systems do. What would that look like? In my view, a FLOSS client platform which fully embraced the web would: Imagine a world where free web applications are as plentiful and malleable as free native applications are today. Developers would be able to branch, test and submit patches to them.

What about Chrome OS?

Chrome OS is a step in the right direction, but doesn't yet realize this vision. It's a traditional operating system which is stripped down and focused on running one application (a web browser) very, very well.
In some ways, it elevates web applications to first-class status, though its paradigm is still fundamentally that of a web browser. It is not designed for development, but for consuming the web. Developers who want to create and deploy web applications must use a more traditional operating system to do so. It does not put the end user in control. On the contrary, the user is almost entirely dependent on SaaS applications for all of their needs. Although it is constructed using free software, it does not seem to deliver the principles or benefits of software freedom to the web itself.

How?

Just as free software was bootstrapped on proprietary UNIX, the present-day web is fertile ground for the development of free web applications. The web is based on open standards. There are already excellent web development tools, web application frameworks and server software which are FLOSS. Leading-edge web browsers like Firefox and Chrome/Chromium, where much web innovation is happening today, are already open source. This is a huge head start toward a free web. I think what's missing is a client platform which catalyzes the development and use of FLOSS web applications.

12 July 2010

Matt Zimmerman: Read, listen, or comprehend: choose two

I have noticed that when I am reading, I cannot simultaneously understand spoken words. If someone speaks to me while I am reading, I can pay attention to their voice, or to the text, but not both. It's as if these two functions share the same cognitive facility, and this facility can only handle one task at a time. If someone is talking on the phone nearby, I find it very difficult to focus on reading (or writing). If I'm having a conversation with someone about a document, I sometimes have to ask them to pause the conversation for a moment while I read.

This phenomenon isn't unique to me. In Richard Feynman's What Do You Care What Other People Think?, there is a chapter entitled "It's as Simple as One, Two, Three" where he describes his experiments with keeping time in his head. He practiced counting at a steady rate while simultaneously performing various actions, such as running up and down the stairs, reading, writing, even counting objects. He discovered that he could do anything while counting to [himself] "except talk out loud". What's interesting is that the pattern varies from person to person. Feynman shared his discovery with a group of people, one of whom (John Tukey) had a curiously different experience: while counting steadily, he could easily speak aloud, but could not read. Through experimenting and comparing their experiences, it seemed to them that they were using different cognitive processes to accomplish the task of counting time. Feynman was hearing the numbers in his head, while Tukey was seeing the numbers go by.

Analogously, I've met people who seem to be able to read and listen to speech at the same time. I attributed this to a similar cognitive effect: presumably some people speak the words to themselves, while others watch them. Feynman found that, although he could write and count at the same time, his counting would be interrupted when he had to stop and search for the right word. Perhaps he used a different mental faculty for that. Some people seem to be able to listen to more than one person talking at the same time, and I wonder if that's related.

I was reminded of this years later, when I came across this video on speed reading. In it, the speaker explains that most people read by silently voicing words, which they can do at a rate of only 120-250 words per minute. However, people can learn to read visually instead, and thereby read much more quickly. He describes a training technique which involves reading while continuously voicing arbitrary sounds, like the vowels A-E-I-O-U. The interesting part, for me, was the possibility of learning. I realized that different people read in different ways, but hadn't thought much about whether one could change this. Having learned a cognitive skill, like reading or counting time, apparently one can re-learn it a different way. Visual reading would seem, at first glance, to be superior: not only is it faster, but I have to use my eyes to read anyway, so why tie up my listening facility as well? Perhaps I could use it for something else at the same time.

So, I tried the simple technique in the video, and it had a definite effect. I could feel that I wasn't reading in the same way that I had been before. I didn't measure whether I was going any faster or slower, because I quickly noticed something more significant: my reading comprehension was completely shot. I couldn't remember what I had read, as the memory of it faded within seconds. Before reaching the end of a paragraph, I would forget the beginning.
It was as if my ability to comprehend the meaning of the text was linked to my reading technique. I found this very unsettling, and it ruined my enjoyment of the book I was reading. I'll probably need to separate this practice from my pleasure reading in order to stick with it. Presumably, over time, my comprehension will improve. I'm curious about what net effect this will have, though. Will I still comprehend it in the same way? Will it mean the same thing to me? Will I still feel the same way about it? The many levels of meaning are connected to our senses as well, and the same idea, depending on whether it was read or heard, may not have the same meaning to an individual. Even our tactile senses can influence our judgments and decisions. I also wonder whether, if I learn to read visually, I'll lose the ability to read any other way. When I retrained myself to type using a Dvorak keyboard layout, rather than QWERTY, I lost the ability to type on QWERTY at high speed. I think this has been a good tradeoff for me, but raises interesting questions about how my mind works: Why did this happen? What else changed in the process that might have been less obvious?

Have you tried re-training yourself in this way? What kind of cognitive side effects did you notice, if any? If you lost something, do you still miss it?

(As a sidenote, I am impressed by Feynman's exuberance and persistence in his personal experiments, as described in his books for laypeople. Although I consider myself a very curious person, I rarely invest that kind of physical and intellectual energy in first-hand experiments. I'm much more likely to research what other people have done, and skim the surface of the subject.)

6 July 2010

Matt Zimmerman: We've packaged all of the free software... what now?

Today, virtually all of the free software available can be found in packaged form in distributions like Debian and Ubuntu. Users of these distributions have access to a library of thousands of applications, ranging from trivial to highly sophisticated software systems. Developers can find a vast array of programming languages, tools and libraries for constructing new applications. This is possible because we have a mature system for turning free software components into standardized modules (packages). Some software is more difficult to package and maintain, and I'm occasionally surprised to find something very useful which isn't packaged yet, but in general, the software I want is packaged and ready before I realize I need it. Even the long tail of niche software is generally packaged very effectively. Thanks to coherent standards, sophisticated management tools, and the principles of software freedom, these packages can be mixed and matched to create complete software stacks for a wide range of devices, from netbooks to supercomputing clusters. These stacks are tightly integrated, and can be tested, released, maintained and upgraded as a unit. The Debian system is unparalleled for this purpose, which is why Ubuntu is based on it. The vision, for a free software operating system which is highly modular and customizable, has been achieved.

Rough edges

This is a momentous achievement, and the Debian packaging system fulfills its intended purpose very well. However, there are a number of areas where it introduces friction, because the package model doesn't quite fit some new problems. Most of these are becoming more common over time as technology evolves and changes shape.

Why are we stuck?
I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.
Abraham Maslow
The packaging ecosystem is very strong. Not only do we have powerful tools for working with packages, we also benefit from packages being a well-understood concept, and having established processes for developing, exchanging and talking about them. Once something is packaged, we know what it is and how to work with it, and it fits into everything else. So, it is tempting to package everything in sight, as we already know how to make sense of packages. However, this may not always be the right tool for the job. Various attempts have been made to extend the packaging concept to make it more general, for example: Other package managers aim to solve a specific problem, such as providing lightweight package management for embedded systems, or lazy dependency installation, or fixing the filesystem hierarchy. There is a long list of package managers of various levels which solve different problems. Most of these systems suffer from an important fundamental tradeoff: they are designed to manage the entire system, from the kernel through applications, and so they must be used wholesale in order to reap their full benefit. In other words, in their world, everything is a package, and anything which is not a package is out of scope. Therefore, each of these systems requires a separate collection of packages, and each time we invent a new one, its adherents set about packaging everything in the new format. It takes a very long time to do this, and most of them lose momentum before a mature ecosystem can form around them. This lock-in effect makes it difficult for new packaging technologies to succeed.

Divide and Conquer

No single package management framework is flexible enough to accommodate all of the needs we have today. Even more importantly, a generic solution won't account for the needs we will have tomorrow. I propose that in order to move forward, we must make it possible to solve packaging problems separately, rather than attempting to solve them all within a single system.

But I like things how they are!

We don't have a choice. The world is changing around us, and distributions need to evolve with it. If we don't adapt, we will eventually give way to systems which do solve these problems. Take, for example, modern web browsers like Firefox and Chromium. Arguably the most vital application for users, the browser is coming under increasing pressure to keep up with the breakneck pace of innovation on the web. The next wave of real-time collaboration and multimedia applications relies on the rapid development of new capabilities in web browsers. Browser makers are responding by accelerating deployment in the field: both aggressively push new releases to their users. A report from Google found that Chrome upgrades 97% of their users within 21 days of a new release, and Firefox 85% (both impressive numbers). Mozilla recently changed their maintenance policies, discontinuing maintenance of stable releases and forcing Ubuntu to ship new upstream releases to users. These applications are just the leading edge of the curve, and the pressure will only increase. Equally powerful trends are pressing server applications, embedded systems, and data to adapt as well. The ideas I've presented here are only one possible way forward, and I'm sure there are more and better ideas brewing in distribution communities. I'm sure that I'm not the only one thinking about these problems. Whatever it looks like in the end, I have no doubt that change is ahead.

27 June 2010

Matt Zimmerman: Navigating the PolicyKit maze

I've written a simple application which will automatically extract media from CDs and DVDs when they are inserted into the drive attached to my server. This makes it easy for me to compile all of my media in one place and access it anytime I like. The application uses the modern udisks API, formerly known as DeviceKit-disks, and I wrote it in part to get some experience working with udisks (which, it turns out, is rather nice indeed). Naturally, I wanted to grant this application the privileges necessary to mount, unmount and eject removable media. The server is headless, and the application runs as a daemon, so this would require explicit configuration. udisks uses PolicyKit for authorization, so I expected this to be very simple to do. In fact, it is very simple, but finding out exactly how to do it wasn't quite so easy. The Internet is full of web pages which recommend editing /etc/PolicyKit/PolicyKit.conf. As far as I can tell, nothing pays attention to this file anymore, and all of these instructions have been rendered meaningless. My system was also full of tools like polkit-auth, from the apparently-obsolete policykit package, which kept their configuration in some other ignored place, i.e. /var/lib/PolicyKit. It seems the configuration system has been through a revolution or two recently. In Ubuntu 10.04, the right place to configure these things seems to be /var/lib/polkit-1/localauthority, and this is documented in pklocalauthority(8). Authorization can be tested using pkcheck(1), and the default policy can be examined using pkaction(1). I solved my problem by creating a file in /var/lib/polkit-1/localauthority/50-local.d with a .pkla extension with the following contents:
[Access to removable media for the media group]
Identity=unix-group:media
Action=org.freedesktop.udisks.drive-eject;org.freedesktop.udisks.filesystem-mount
ResultAny=yes
This took effect immediately and did exactly what I needed. I lost quite some time trying to figure out why the other methods weren't working, so perhaps this post will save the next person a bit of time. It may also inspire some gratitude for the infrastructure which makes all of this work automatically for more typical usage scenarios, so that most people don't need to worry about any of this. Along the way, I whipped up a patch to add a --eject option to the handy udisks(1) tool, which made it easier for me to test.
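For completeness, here is a rough sketch of my own (not from the original post) of how one might confirm that the authorization applies, using the pkaction(1) and pkcheck(1) tools mentioned above; "my-media-daemon" is a placeholder for whatever process actually performs the extraction:
# Show the default policy shipped for the eject action
pkaction --action-id org.freedesktop.udisks.drive-eject --verbose
# Ask PolicyKit whether the daemon's process is authorized to mount filesystems
pkcheck --action-id org.freedesktop.udisks.filesystem-mount --process $(pidof my-media-daemon) && echo authorized || echo not authorized
If the .pkla file above is in place and the daemon runs as a user in the media group, pkcheck should exit successfully.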

23 June 2010

Matt Zimmerman: Habit forming

I find that habits are best made and broken in sets. If I want to form a new habit, I'll try to get rid of an old one at the same time. I don't know why this works, but it seems to. Perhaps I only have room in my head for a certain number of habits, so if I want a new one, then an old one has to go. I'm sure some combinations are better than others. I'm currently working on changing some habits, including: I'm thinking of adding a reading habit to the set, but it's going well so far and I don't want to overdo it. I feel good, and am forming a new routine. The flossing is definitely the hardest of the three. I hate pretty much everything about flossing. It also unbalances the set, so that I have a net gain of one habit. Maybe that's the real reason, and if I broke another habit, it would get easier. Does anyone else have this experience? What sort of tricks do you employ to help you change your behavior?

21 June 2010

Matt Zimmerman: Finishing books

Having invested in some introspection into my reading habits, I made up my mind to dial down my consumption of bite-sized nuggets of online information, and finish a few books. That's where my bottleneck has been for the past year or so. Not in selecting books, not in acquiring books, and not in starting books either. I identify promising books, I buy them, I start reading them, and at some point, I put them down and never pick them back up again. Until now. Over the weekend, I finished two books. I started reading both in 2009, and they each required my sustained attention for a period measured in hours in order to finish them. Taking a tip from Dustin, I decided to try alternating between fiction and non-fiction.

Jitterbug Perfume by Tom Robbins

This was the first book I had read by Tom Robbins, and I am in no hurry to read any more. It certainly wasn't without merit: its themes were clever and artfully interwoven, and the prose elicited a silent chuckle now and again. It was mainly the characters which failed to earn my devotion. They spoke and behaved in ways I found awkward at best, and problematic at worst. Race, gender, sexuality and culture each endured some abuse on the wrong end of a pervasive white male heteronormative American gaze. I really wanted to like Priscilla, who showed early promise as a smart, self-reliant individual, whose haplessness was balanced by a strong will and sense of adventure. Unfortunately, by the later chapters, she was revealed as yet another vacant vessel yearning to be filled by a man. She's even the steward of a symbolic, nearly empty perfume bottle throughout the book. Yes, really.

Managing Humans by Michael Lopp

Of the books I've read on management, this one is perhaps the most outrageously reductionist. Many management books are like this, to a degree. They take the impossibly complex problem domain of getting people to work together, break it down into manageable problems with tidy labels, and prescribe methods for solving them (which are hopefully appropriate for at least some of the reader's circumstances). Managing Humans takes this approach to a new level, drawing neat boxes around such gestalts as companies, roles, teams and people, and assigning them Proper Nouns. Many of these bear a similarity to concepts which have been defined, used and tested elsewhere, such as psychological types, but the text makes no effort to link them to his own. Despite being a self-described "collection of tales", it's structured like a textbook, ostensibly imparting nuggets of managerial wisdom acquired through lessons learned in the Real World (so pay attention!). However, as far as I can tell, the author's experience is limited to a string of companies of a very specific type: Silicon Valley software startups in the dot-com era. Lopp (also known as Rands) does have substantial insight into this problem domain, though, and does an entertaining job of illustrating the patterns which have worked for him. If you can disregard the oracular tone, grit your teeth through the gender stereotyping, and add an implicit preface that this is (sometimes highly) context-sensitive advice, this book can be appreciated for what it actually is: a coherent, witty and thorough exposition of how one particular manager does their job. I got some good ideas out of this book, and would recommend it to someone working in certain circumstances, but as with Robbins, I'm not planning to track down further work by the same author.

12 June 2010

Matt Zimmerman: How to decide what to read (and what not to read)?

Like you, dear Internet readers, I have no shortage of reading material. I have ready access to more engaging, high quality, informative and relevant information than I can possibly digest. Every day, I have to choose what to read, and what to pass by. This seems like an important thing to do well, and I wonder if I do a good enough job of it. This is just one example of a larger breadth/depth problem, but I'm finding the general problem difficult to stomach, so I'm focusing on reading for the moment. These are my primary sources of reading material on a day-to-day basis: How do you decide what to read, and what not to read? How does your experience differ between your primary information sources? How have you tried to improve?

8 June 2010

Matt Zimmerman: DevOps and Cloud

DevOps

I first heard about DevOps from Lindsay Holmwood at linux.conf.au 2010. Since then, I've been following the movement with interest. It seems to be about cross-functional involvement in software teams, specifically between software development and system administration (or operations). In many organizations, especially SaaS shops, these two groups are placed in opposition to each other: developers are driven to deliver new features to users, while system administrators are held accountable for the operation of the service. In the best case, they maintain a healthy balance by pushing in opposite directions, but more typically, they resent each other for getting in the way, as a result of this dichotomy:
                          Development                     Operations
is responsible for        creating products               offering services
is measured on            delivery of new features        high reliability
optimizes by              increasing velocity             controlling change
and so is perceived as    reckless and irresponsible      obstructing progress
Of course, both functions are essential to a viable service, and so DevOps aims to replace this opposition with cooperation. By removing this friction from the organization, we hope to improve efficiency, lower costs, and generally get more work done.
So, DevOps promotes the formation of cross-functional teams, where individuals still take on specialist development or operations roles, but work together toward the common goal of delivering a great experience to users. By working as teammates, rather than passing work "over the wall", they can both contribute to development, deployment and maintenance according to their skills and expertise. The team becomes a devops team, and is responsible for the entire product life cycle. Particular tasks may be handled by specialists, but when there's a problem, it's the team's problem. Some take it a step further, and feel that what's needed is to combine the two disciplines, so that individuals contribute in both ways. Rather than thinking of themselves as "developers" or "sysadmins", these folks consider themselves "devops". They work to become proficient in both roles, and to synthesize new ways of working by drawing on both types of skills and experience. A common crossover activity is the development of sophisticated tools for automating deployment, monitoring, capacity management and failure resolution.

DevOps meets Cloud

Like DevOps, cloud is not a specific technology or method, but a reorganization of the model (as I've written previously). It's about breaking down the problem in a different way, splitting and merging its parts, and creating a new representation which doesn't correspond piece-for-piece to the old one. DevOps drives cloud because it offers a richer toolkit for the way they work: fast, flexible, efficient. Tools like Amazon EC2 and Google App Engine solve the right sorts of problems. Cloud also drives DevOps because it calls into question the traditional way of organizing software teams. A development/operations division just doesn't fit cloud as well as a DevOps model. Deployment is a classic duty of system administrators. In many organizations, only the IT department can implement changes in the production environment. Reaping the benefits of an IaaS environment requires deploying through an API, and therefore deployment requires development. While it is already common practice for system administrators to develop tools for automating deployment, and tools like Puppet and Chef are gaining momentum, IaaS makes this a necessity, and raises the bar in terms of sophistication. Doing this well requires skills and knowledge from both sides of the fence between development and operations, and can accelerate development as well as promote stability in production.
This is exemplified by infrastructure service providers like Amazon Web Services, where customers pay by the hour for black-box access to computing resources. How those resources are provisioned and maintained is entirely Amazon's problem, while its customers must decide how to deploy and manage their applications within Amazon's IaaS framework. In this scenario, some operations work has been explicitly outsourced to Amazon, but IaaS is not a substitute for system administration. Deployment, monitoring, failure recovery, performance management, OS maintenance, system configuration, and more are still needed. A development team which is lacking the experience or capacity for this type of work cannot simply switch to an IaaS model and expect these needs to be taken care of by their service provider.
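As a rough illustration of "deploying through an API" (my own sketch, not from the post), using the classic EC2 command-line API tools; the AMI ID, key pair and security group names below are placeholders:
# hypothetical deploy script -- AMI ID, key pair and security group are placeholders
AMI=ami-12345678                                 # machine image with the application stack baked in
ec2-run-instances $AMI -n 2 -t m1.small -k deploy-key -g web-frontend
ec2-describe-instances                           # check state and note the public hostnames
Because the deployment is expressed as code, a script like this can be versioned and reviewed alongside the application itself.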
With platform service providers, the boundaries are different. Developers, if they build their application on the appropriate platform, can effectively outsource (mostly) the management of the entire production environment to their service provider. The operating system is abstracted away, and its maintenance can be someone else's problem. For applications which can be built with the available facilities, this will be a very attractive option for many organizations. The customers of these services may be traditional developers, who have no need for operations expertise. PaaS providers, though, will require deep expertise in both disciplines in order to build and improve their platform and services, and will likely benefit from a DevOps approach. Technical architecture draws on both development and operations expertise, because design goals like performance and robustness are affected by all layers of the stack, from hardware, power and cooling all the way up to application code. DevOps itself promotes greater collaboration on architecture, by involving experts in both disciplines, but cloud is a great catalyst because cloud architecture can be described in code. Rather than talking to each other about their respective parts of the system, they can work together on the whole system at once. Developers, sysadmins and hybrids can all contribute to a unified source tree, containing both application code and a description of the production environment: how many virtual servers to deploy, their specifications, which components run on which servers, how they are configured, and so on. In this way, system and network architecture can evolve in lockstep with application architecture. Cloudy promises such as dynamic scaling and fault tolerance call for a DevOps approach in order to be realized in a real-world scenario. These systems involve dynamically manipulating production infrastructure in response to changing conditions, and the application must adapt to these changes. Whether this takes the form of an active, intelligent response or a passive crash-only approach, development and operational considerations need to be aligned.

So what?

DevOps and cloud will continue to reinforce each other and gain momentum. Both individuals and organizations will need to adapt in order to take advantage of the opportunities provided by these new models. Because they're complementary, it makes sense to adopt them together, so those with expertise in both will be at an advantage.

5 June 2010

Julien Valroff: Goodbye Facebook

Today, I have finally permanently deleted my Facebook account. I have followed Matt Zimmerman's procedure, and it seems everything went well.
Now, I just need to not log in to Facebook over the next 14 days, which shouldn't be a problem at all (my last connection was something like 2 or 3 months ago!). My aim is now to make more and more people use identi.ca as their primary social network. I will also follow the Diaspora project. My identi.ca profile: http://identi.ca/julienvalroff

31 May 2010

Matt Zimmerman: Summary of development plans for Ubuntu 10.10

With the 10.10 developer summit behind us, several teams have published engineering plans for the 10.10 release cycle, including:

29 May 2010

Matt Zimmerman: Extracting files from a nandroid backup using unyaffs

I recently upgraded my G1 phone to the latest Cyanogen build (5.x). Since the upgrade instructions recommend wiping user data, I made a nandroid backup first, using the handy Amon_RA recovery image. I've gotten pretty familiar with the Android filesystem layout, and was confident I could restore anything I really missed (such as my wpa_supplicant.conf with all of my WiFi credentials). It wasn't until I finished with the upgrade that I realized the backup wasn't trivial to work with. It's a raw yaffs2 flash image, which can't be simply mounted on a loop device. After messing around for a bit with the nandsim module, mtd-utils and the yaffs2 kernel module, I realized there was a much simpler way: the unassuming unyaffs. It says that it can only extract images created by mkyaffs2image, but apparently the images in the nandroid backup are created this way (or otherwise compatible with unyaffs). So I downloaded and built unyaffs:
svn checkout http://unyaffs.googlecode.com/svn/trunk/ unyaffs
cd unyaffs
gcc -o unyaffs unyaffs.c
and then ran it on the backup image:
mkdir g1data && cd g1data # unyaffs extracts into the current directory
~/src/android/unyaffs/unyaffs /media/G1data/nandroid/HT839GZ23983/BCDS-20100529-1311/data.img
At which point I could restore files one by one, e.g.:
adb push /tmp/g1data/misc/wifi/wpa_supplicant.conf /data/misc/wifi/
After toggling WiFi off and then back on, all of my credentials were restored. I was able to restore preferences for various applications in the same way.
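The same pattern works in a loop for several applications at once; as a hypothetical example (the package names below are placeholders for whatever actually happens to be in the backup):
# restore saved preferences for a few apps from the extracted image
for pkg in com.android.browser com.google.android.apps.maps; do
    adb push /tmp/g1data/data/$pkg/shared_prefs /data/data/$pkg/shared_prefs
done
Note that application data on Android is owned by per-application UIDs, so files restored this way may need their ownership fixed (or the application reinstalled first) before the app will pick them up.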

27 May 2010

Matt Zimmerman: Rethinking the Ubuntu Developer Summit

This is a repost from the ubuntu-devel mailing list, where there is probably some discussion happening by now. After each UDS, the organizers evaluate the event and consider how it could be further improved in the future. As a result of this process, the format of UDS has evolved considerably, as it has grown from a smallish informal gathering to a highly structured matrix of hundreds of 45-to-60-minute sessions with sophisticated audiovisual facilities. If you participated in UDS 10.10 (locally or online), you have hopefully already completed the online survey, which is an important part of this evaluation process. A survey can't tell the whole story, though, so I would also like to start a more free-form discussion here among Ubuntu developers as well. I have some thoughts I'd like to share, and I'm interested in your perspectives as well.

Purpose

The core purpose of UDS has always been to help Ubuntu developers to explore, refine and share their plans for the subsequent release. It has expanded over the years to include all kinds of contributors, not only developers, but the principle remains the same. We arrive at UDS with goals, desires and ideas, and leave with a plan of action which guides our work for the rest of the cycle.

The status quo

UDS looks like this: This screenshot is only 1600x1200, so there are another 5 columns off the right edge of the screen, for a total of 18 rooms. With 7 time slots per day over 5 days, there are over 500 blocks in the schedule grid. 9 tracks are scattered over the grid. We produce hundreds of blueprints representing projects we would like to work on. It is an impressive achievement to pull this event together every six months, and the organizers work very hard at it. We accomplish a great deal at every UDS, and should feel good about that. We must also constantly evaluate how well it is working, and make adjustments to accommodate growth and change in the project.

How did we get here?

(This is all from memory, but it should be sufficiently accurate to have this discussion.) In the beginning, before it was even called UDS, we worked from a rough agenda, adding items as they came up, and ticking them off as we finished talking about them. Ad hoc methods worked pretty well at this scale. As the event grew, and we held more and more discussions in parallel, it was hard to keep track of who was where, and we started to run into contention. Ubuntu and Launchpad were planning their upcoming work together at the same time. One group would be discussing topic A, and find that they needed the participation of person X, who was already involved in another discussion on topic B. The A group would either block, or go ahead without the benefit of person X, neither of which was seen to be very effective. By the end of the week, everyone was mentally and physically exhausted, and many were ill. As a result, we decided to adopt a schedule grid, and ensure that nobody was expected to be in two places at once. Our productivity depended on getting precisely the right people face to face to tackle the technical challenges we faced. This meant deciding in advance who should be present in each session, and laying out the schedule to satisfy these constraints. New sessions were being added all the time, so the UDS organizers would stay up late at night during the event, creating the schedule grid for the next day. In the morning, over breakfast, everyone would tell them about errors, and request revisions to the schedule.
Revisions to the schedule were painful, because we had to re-check all of the constraints by hand. So, in the geek spirit, we developed a program which would read in the constraints and generate an error-free schedule. The UDS organizers ran this at the end of each day during the event, checked it over, and posted it. In the morning, over breakfast, everyone would tell them about constraints they hadn't been aware of, and request revisions to the schedule. Revisions to the schedule were painful, because a single changed constraint would completely rearrange the schedule. People found themselves running all over the place to different rooms throughout the day, as they were scheduled into many different meetings back-to-back.

At around this point, UDS had become too big, and had too many constraints, to plan on the fly (unconference style). We resolved to plan more in advance, and agree on the scheduling constraints ahead of time. We divided the event into tracks, and placed each track in its own room. Most participants could stay in one place throughout the day, taking part in a series of related meetings, except where they were specifically needed in an adjacent track. We created the schedule through a combination of manual and automatic methods, so that scheduling constraints could be checked quickly, but a human could decide how to resolve conflicts. There was time to review the schedule before the start of the event, to identify and fix problems, and revisions to the schedule during the event were fewer and less painful. We added keynote presentations, to provide opportunities to communicate important information to everyone, and to ease back into meetings after lunch.

Everyone was still exhausted and/or ill, and tiredness took its toll on the quality of discussion, particularly toward the end of the week. Concerns were raised that people weren't participating enough, and might stay on in the same room passively when they might be better able to contribute to a different session happening elsewhere. As a result, the schedule was randomly rearranged so that related sessions would not be held in the same room, and everyone would get up and move at the end of each hour. This brings us roughly to where things stand today.

Problems with the status quo
  1. UDS is big and complex. Creating and maintaining the schedule is a lot of work in itself, and this large format requires a large venue, which in turn requires more planning and logistical work (not to mention cost). This is only worthwhile if we get proportionally more benefit out of the event itself.
  2. UDS produces many more blueprints than we need for a cycle. While some of these represent an explicit decision not to pursue a project, most of them are set aside simply because we can't fit them in. We have the capacity to implement over 100 blueprints per cycle, but we have *thousands* of blueprints registered today. We finished less than half of the blueprints we registered for 10.04. This means that we're spending a lot of time at UDS talking about things which can't get done that cycle (and may never get done).
  3. UDS is (still) exhausting. While we should work hard, and a level of intensity helps to energize us, I think it's a bit too much. Sessions later in the week are substantially more sluggish than early on, and don't get the full benefit of the minds we've brought together. I believe that such an intense format does not suit the type of work being done at the event, which should be more creative and energetic.
  4. The format of UDS is optimized for short discussions (as many as we can fit into the grid). This is good for many technical decisions, but does not lend itself as well to generating new ideas, deeply exploring a topic, building broad consensus or tackling big picture issues facing the project. These deeper problems sometimes require more time. They also benefit tremendously from face-to-face interaction, so UDS is our best opportunity to work on them, and we should take advantage of it.
  5. UDS sessions aim for the minimum level of participation necessary, so that we can carry on many sessions in parallel: we ask, "who do we need in order to discuss this topic?" This is appropriate for many meetings. However, some would benefit greatly from broader participation, especially from multiple teams. We don't always know in advance where a transformative idea will come from, and having more points of view represented would be valuable for many UDS topics.
  6. UDS only happens once per cycle, but design and planning need to continue throughout the cycle. We can't design everything up front, and there is a lot of information we don't have at the beginning. We should aim to use our time at UDS to greatest effect, but also make time to continue this work during the development cycle: "design a little, build a little, test a little, fly a little".
Proposals
  1. Concentrate on the projects we can complete in the upcoming cycle. If we aren't going to have time to implement something until the next cycle, the blueprint can usually be deferred to the next cycle as well. By producing only moderately more blueprints than we need, we can reduce the complexity of the event, avoid waste, prepare better, and put most of our energy into the blueprints we intend to use in the near future.
  2. Group related sessions into clusters, and work on them together, with a similar group of people. By switching context less often, we can more easily stay engaged, get less fatigued, and make meaningful connections between related topics.
  3. Organize for cross-team participation, rather than dividing teams into tracks. A given session may relate to a Desktop Edition feature, but depends on contributions from more than just the Desktop Team. There is a lot of design going on at UDS outside of the Design track. By working together to achieve common goals, we can more easily anticipate problems, benefit from diverse points of view, and help each other more throughout the cycle.
  4. Build in opportunities to work on deeper problems, during longer blocks of time. As a platform, Ubuntu exists within a complex ecosystem, and we need to spend time together understanding where we are and where we are going. As a project, we have grown rapidly, and need to regularly evaluate how we are working and how we can improve. This means considering more than just blueprints, and sometimes taking more than an hour to cover a topic.

26 May 2010

Matt Zimmerman: Using Mumble with a bluetooth headset

At Canonical, we've started experimenting with Mumble as an alternative to telephone calls for real-time conversations. The operating model is very much like IRC, based on channels within which everyone can hear everyone else as they speak. Mumble works best with a headset, which offers better audio recording quality due to the proximity of the microphone, and avoids problems with echo and feedback. I like to pace around while I talk, and so I've already invested in a Plantronics Calisto Pro, which includes a DECT handset, a Bluetooth headset and a nice charging base. My laptop has Bluetooth onboard, so I set about trying to get Mumble set up to use the headset via Bluetooth.

The first thing I tried was to click on the Bluetooth icon on the panel, and select "Set up new device...". After setting the headset to pairing mode, I waited quite some time for it to show up in the list, but it never did. After opening the preferences dialog, I discovered that this was (presumably) because I had already paired it, ages ago. So, I went about trying to get PulseAudio to talk to it. After some hunting, I tried:
pactl load-module module-bluetooth-device address=00:23:xx:xx:xx:xx
This created a new Card in PulseAudio, which I could see in the Hardware tab of the Sound Preferences dialog and in pactl list, but it was inactive:
Card #1
    Name: bluez_card.00_23_XX_XX_XX_XX
    Driver: module-bluetooth-device.c
    Owner Module: 17
    Properties:
        device.description = "Calisto PLT"
        device.string = "00:23:XX:XX:XX:XX"
        device.api = "bluez"
        device.class = "sound"
        device.bus = "bluetooth"
        device.form_factor = "headset"
        bluez.path = "/org/bluez/13899/hci0/dev_00_23_XX_XX_XX_XX"
        bluez.class = "0x200404"
        bluez.name = "Calisto PLT"
        device.icon_name = "audio-headset-bluetooth"
    Profiles:
        hsp: Telephony Duplex (HSP/HFP) (sinks: 1, sources: 1, priority. 20)
        off: Off (sinks: 0, sources: 0, priority. 0)
    Active Profile: off
The Hardware tab confirmed that the device was Disabled, and using the Off profile. I could manually select the Telephony Duplex (HSP/HFP) profile, but this had no apparent effect. There were no Sources or Sinks to send or receive audio data to or from the headset (and thus nothing new in the Input or Output tabs of the preferences dialog). syslog hinted:
pulseaudio[15239]: module-bluetooth-device.c: Default profile not connected, selecting off profile
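(As an aside, the same manual profile selection can also be done from the command line rather than the Sound Preferences dialog. This is only a sketch, using the card name and the hsp profile name from the pactl list output above; the card name will differ on another machine.)

pactl set-card-profile bluez_card.00_23_XX_XX_XX_XX hsp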
At this point, I recalled that when I suspend this laptop, the Bluetooth driver gets unhappy. I don't commonly use Bluetooth on this laptop, so I hypothesized that the driver was in a weird state, and I decided to try unloading and reloading the btusb module. Once I did so, the device showed up in the panel menu, with a Connect menu item. Aha! The manual module loading above may turn out to be unnecessary if the device shows up in the menu initially. I selected the Connect menu item, and a bunch of magic happened, with the result that I heard the headset's tone to indicate it was activated. Sound Preferences now showed it under Hardware as active using the Telephony Duplex profile, and it appeared under Input and Output as well. pactl list showed its sources and sinks. Mumble offered it as a choice for the input and output device. Progress!

Some experimentation (thanks, Colin) revealed that other people could hear me through the headset just fine. However (Problem #1), I couldn't hear them clearly through the headset. If I switch Mumble's output to my speakers, they sound fine, so it's not Mumble. So I tried:
paplay -d bluez_sink.00_23_XX_XX_XX_XX /usr/share/sounds/alsa/Front_Center.wav
which also sounds awful. There is a whining noise, which gets louder when the audio signal is louder, and which makes it very difficult to hear. I don't know if the problem is with PulseAudio, bluez, the kernel, or the device, but using speakers for output is a workaround. This does seem to cause some echo, though, so I'll need to track this down eventually.

Problem #2 is that Mumble seems to prevent PulseAudio from suspending the headset's source. Even if I set it to Push to Talk mode, the headset stays active all the time, which will drain the battery. PulseAudio seems to do the right thing, killing the radio link if the source is left idle and bringing it back up when there is activity, so this looks to be Mumble's fault. I'll need to fix this as well.
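For reference, the sequence that got the headset connected and recognized looks roughly like this. This is only a sketch, assuming the btusb driver and the device address and card name shown above; adjust the names for your own hardware:

# reload the Bluetooth driver if it is in a bad state after suspend
sudo modprobe -r btusb
sudo modprobe btusb
# if the card does not appear on its own, load the PulseAudio module by hand
pactl load-module module-bluetooth-device address=00:23:xx:xx:xx:xx
# select the telephony profile (or use the panel's Connect menu item)
pactl set-card-profile bluez_card.00_23_XX_XX_XX_XX hsp
# confirm that the headset's source and sink now appear in pactl list
pactl list
# test playback through the headset
paplay -d bluez_sink.00_23_XX_XX_XX_XX /usr/share/sounds/alsa/Front_Center.wav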

25 May 2010

Matt Zimmerman: The behavioral economics of free software

People who use and promote free software cite various reasons for their choice, but do those reasons tell the whole story? If, as a community, we want free software to continue to grow in popularity, especially in the mainstream, we should better understand the true reasons for choosing it, especially our own.

Some believe that it offers higher quality, that the availability of source code results in a better product with higher reliability. Although it's difficult to do an apples-to-apples comparison of software, there are certainly instances where free software components have been judged superior to their proprietary counterparts. I'm not aware of any comprehensive analysis of the general case, though, and there is plenty of anecdotal evidence on both sides of the debate. Others prefer it for humanitarian reasons, because it's better for society or brings us closer to the world we want to live in. These are more difficult to analyze objectively, as they are closely linked to the individual, their circumstances and their belief system. For developers, a popular reason is the possibility of modifying the software to suit their needs, as enshrined in the Free Software Foundation's freedom 1. This is reasonable enough, though the practical value of this opportunity will vary greatly depending on the software and circumstances. The list goes on: cost savings, educational benefits, universal availability, social rewards, etc.

The wealth of evidence of cognitive bias indicates that we should not take these preferences at face value. Not only are human choices seldom rational, they are rarely well understood even by the people who make them. When asked to explain our preferences, we often have a ready answer (indeed, we may never run out of reasons), but those reasons may not withstand analysis. We have many different ways of fooling ourselves with regard to our own past decisions and held beliefs, as well as those of others.

Behavioral economics explores the way in which our irrational behavior affects economies, and the results are curious and subtle. For example, the riddle of experience versus memory (TED video), or the several examples in The Marketplace of Perception (Harvard Magazine article). I think it would be illuminating to examine free software through this lens, and to consider that the vagaries of human perception may have a very strong influence on our choices.

Some questions for thought:

If you're aware of any studies along these lines, I would be interested to read about them.
